Version: Spectra Analyze 9.2.0

Submissions API

Submit samples for analysis

POST /api/uploads/

The Submissions API accepts a file or a URL to be analyzed. Only one file or URL can be submitted in a single request. For additional workflows, such as scanning entire directories, see our SDK documentation and its cookbook.

After submitting a file or URL, users can check the status of individual submissions. For URL submissions, the task ID returned in the response can be used with the Check status of submitted URLs endpoint; for file submissions, the href fragment returned in the response links directly to the sample analysis report.

If the user submits a URL which links to a single file, the file is downloaded to the appliance in its original format (for example: PDF, EXE, RTF…). If the user submits a link to a website, the appliance uses a simple web crawler that retrieves the content up to 1 level from the submitted URL (including links to other domains). The scraped content is downloaded to the appliance in a single ZIP file.

By default, submitted files and content downloaded from URLs are analyzed with the Spectra Core static analysis engine built into Spectra Analyze. Additional analysis options depend on the configuration of the appliance (such as the enabled dynamic analysis services), and on the optional parameters provided by the user in the request.

The duration of the analysis depends on the number of submitted files (or files downloaded from the submitted URL), their sizes and file types, and the number of files extracted during analysis, which are also analyzed separately. The timeout for URL submissions is 45 minutes.
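Because analysis time varies with the number, size, and type of files, clients usually poll for completion instead of assuming a fixed duration. A minimal client-side polling sketch, bounded by the 45-minute URL timeout mentioned above; the function name, the `check_status` callable, and the poll interval are illustrative assumptions, not part of this API:

```python
import time

URL_SUBMISSION_TIMEOUT = 45 * 60  # seconds; matches the appliance's URL timeout

def wait_for_completion(check_status, poll_interval=30,
                        timeout=URL_SUBMISSION_TIMEOUT):
    """Poll a status callable until it reports completion or the deadline passes.

    check_status is any zero-argument callable returning True once the
    submission has finished processing (for example, a wrapper around the
    Check status of submitted URLs endpoint). Returns False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check_status():
            return True
        time.sleep(poll_interval)
    return False
```

The callable-based design keeps the sketch independent of any particular status endpoint; wire it to whichever status check your workflow uses.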

Supported analysis / crawling options

For sample submissions: Spectra Intelligence

To ensure that the sample receives a classification status other than Unknown (not classified), it is recommended to include the optional analysis parameter with the cloud value in the request. This will send the sample to Spectra Intelligence for scanning, which does not happen by default.

The prerequisite for this is a properly configured Spectra Intelligence account on the appliance.

For URL submissions: Local / Spectra Intelligence

Submitted URLs are treated differently depending on the value of the crawler parameter provided in the request.

  • The local crawling method will treat the URL as any other locally submitted file, as a private submission. The contents of the URL will be crawled and downloaded directly. This method can be used without a Spectra Intelligence account.
  • The cloud crawling method is more reliable when working in restricted network conditions and ensures fewer failed URL analyses, but all submitted URLs and downloaded files will be treated as public, and will be visible and accessible to all Spectra Intelligence users. The prerequisite for this is a properly configured Spectra Intelligence account on the appliance.
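The choice between the two crawling methods can be sketched as a small helper. The hostname, token, and function names below are illustrative placeholders, not part of the API; only the `url` and `crawler` form fields come from this documentation:

```python
import requests

# Placeholder hostname and token -- replace with your appliance's values.
APPLIANCE = "https://appliance.example.com"
TOKEN = "exampletoken"

def build_url_submission(url_to_scan, crawler):
    """Form fields for a URL submission with an explicit crawling method.

    'local' keeps the submission private and works without a Spectra
    Intelligence account; 'cloud' (the appliance default) is more reliable
    in restricted networks but makes the submission public.
    """
    if crawler not in ("local", "cloud"):
        raise ValueError(f"unsupported crawling method: {crawler!r}")
    return {"url": url_to_scan, "crawler": crawler}

def submit_url(url_to_scan, crawler):
    # Add verify=False if the appliance uses a self-signed SSL certificate.
    return requests.post(
        f"{APPLIANCE}/api/uploads/",
        headers={"Authorization": f"Token {TOKEN}"},
        data=build_url_submission(url_to_scan, crawler),
    )
```

Passing the crawling method explicitly, rather than relying on the default, avoids surprises if the appliance administrator has changed the default method.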

Dynamic Analysis Services

Samples with supported file types can be automatically submitted to dynamic analysis services after they are uploaded to the appliance. This depends on the configuration of each dynamic analysis service on the appliance.

For the CAPE Sandbox service, a sample can be automatically submitted if:

  • its file type matches the file types supported by and configured for the CAPE integration
  • it has not been analyzed by CAPE before, or the Submit only distinct files to CAPE option is disabled in the CAPE integration configuration dialog. When that option is disabled, all samples of supported file types can be submitted to CAPE for analysis. When it is enabled, only samples that have never been analyzed with CAPE are submitted automatically; a file that was previously uploaded to Spectra Analyze and already analyzed with CAPE will not be sent for analysis automatically when uploaded again, though it can still be submitted manually for reanalysis.
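The two conditions above can be summarized in a short decision sketch. This is an illustration of the documented rules, not appliance code; the function and parameter names are invented for clarity:

```python
def cape_auto_submit(file_type, supported_types, analyzed_before, distinct_only):
    """Illustrative sketch of the CAPE auto-submission rules.

    distinct_only mirrors the "Submit only distinct files to CAPE" option
    in the CAPE integration configuration dialog.
    """
    if file_type not in supported_types:
        return False  # unsupported file types are never auto-submitted
    if distinct_only and analyzed_before:
        return False  # previously analyzed files require manual resubmission
    return True

# A previously analyzed sample is skipped when the distinct-files option is on:
cape_auto_submit("PE", {"PE", "PDF"}, analyzed_before=True, distinct_only=True)   # False
# ...but auto-submitted when the option is off:
cape_auto_submit("PE", {"PE", "PDF"}, analyzed_before=True, distinct_only=False)  # True
```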

Submission restrictions

The same privacy and file size restrictions apply when submitting files via this API as when submitting them from the Spectra Analyze GUI.

The maximum allowed size of data to download from submitted URLs can be configured by the appliance administrator. By default, it is limited to 200 MB. This refers to all data downloaded from a URL, not just to the size of a single file.

By default, download requests from the appliance to a submitted URL time out after 300 seconds (5 minutes), and each download is reattempted 3 times. These restrictions are configurable by appliance administrators in the Administration ‣ Configuration ‣ URL Analysis dialog.

Request Parameters

  • file: Required if submitting a file; mutually exclusive with url. Contents of the file to upload to the appliance for analysis. Only one file can be submitted in a single request. If this parameter is used, the url parameter cannot be used in the same request. Type: body, file.
  • url: Required if submitting a URL; mutually exclusive with file. URL from which the appliance should download the data and submit it for analysis. Only one URL can be submitted in a single request. If this parameter is used, the file parameter cannot be used in the same request. The URL must be valid and include the protocol. Supported protocols are HTTP and HTTPS. Type: form, string.
  • analysis: Optional; works only with file. Comma-separated list of analysis types the sample should be queued for. Supported values: cloud. Type: form, string.
  • crawler: Optional; works only with url. Defines the crawling method used for the submitted URL. Supported values: local, cloud. The default crawling method is cloud. Users who omit this parameter should check with their appliance administrator whether the default has been changed. Type: form, string.
  • archive_password: Optional. When submitting a password-protected archive, this parameter provides the password required to unpack it. If this parameter is provided in the request, the ZIP file must contain only one file. Upon successful extraction, only the extracted file is uploaded and the archive is discarded. If the extracted file type is supported by the ReversingLabs Cloud Sandbox, and automatic uploading of files is enabled, the file is forwarded for dynamic analysis. Type: form, string.
  • rl_cloud_sandbox_platform: Optional. The platform to use when executing the sample on the ReversingLabs Cloud Sandbox. Supported values: windows7, windows10, windows11, macos_11, ubuntu_20. Including this parameter forwards the sample to the ReversingLabs Cloud Sandbox even if automatic submission of samples is disabled. Type: form, string.
  • comment: Optional; works only with file. Comment to add to the uploaded sample. If added, the comment is visible on the Sample Details > Discussion page. Supports the following HTML tags, which are rendered in the UI: p, br, a, strong, b, em, i. Type: form, string.
  • filename: Optional; works only with file. Explicitly sets a filename for the uploaded file. If not provided, the default filename or the SHA1 hash of the uploaded file is used as its name on the appliance. Type: form, string.
  • tags: Optional; works only with file. One or more User Tags to assign to the uploaded sample. Multiple tags should be comma-separated. Supported characters for tags are [a-z0-9][a-z0-9 /._-]. Tags are case-sensitive and distinguish spaces from underscores: Example and example are two separate tags, as are test_tag and test tag. The total length of the submitted value(s) for this parameter must not exceed 42 characters. Type: form, string.
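As an example of combining the optional parameters, the sketch below submits a password-protected archive. The hostname, token, and helper names are illustrative placeholders; only the form field names (archive_password, filename, tags) come from the parameter list above:

```python
import requests

# Placeholder values -- replace with your appliance hostname and API token.
URL = "https://appliance.example.com/api/uploads/"
TOKEN = "exampletoken"

def archive_form_fields(password, filename=None, tags=None):
    """Optional form fields for submitting a password-protected archive.

    The archive must contain only one file; on success the appliance
    uploads the extracted file and discards the archive itself.
    """
    data = {"archive_password": password}
    if filename:
        data["filename"] = filename
    if tags:
        data["tags"] = ",".join(tags)  # total length must not exceed 42 chars
    return data

def submit_archive(path, password, **kwargs):
    # Add verify=False if the appliance uses a self-signed SSL certificate.
    with open(path, "rb") as fh:
        return requests.post(
            URL,
            headers={"Authorization": f"Token {TOKEN}"},
            data=archive_form_fields(password, **kwargs),
            files={"file": fh},
        )
```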

Response Format

Response Examples

Response for a submitted file

{
  "code": 201,
  "message": "Done.",
  "detail": {
    "id": 1234,
    "sha1": "988881adc9fc3655077dc2d4d757d480b5ea0e11",
    "user": 1,
    "created": "2015-06-18T12:01:15.123456Z",
    "filename": "test.txt",
    "href": "/?q=988881adc9fc3655077dc2d4d757d480b5ea0e11"
  }
}

Response for a submitted URL

{
  "code": 201,
  "message": "Done.",
  "detail": {
    "id": 22912,
    "user": 1,
    "created": "2020-05-13T10:07:07.933860Z",
    "filename": "https://www.example.com"
  }
}

Response Fields

  • code: Status response code. Type: integer.
  • message: Informative message describing the status of the request. Type: string.
  • detail: Additional details about the submission. Type: object.

The detail object contains the following fields:

  • id: Identification number of the submission processing task on the appliance. For URL submissions, this value should be used with the Check status of submitted URLs endpoint to check their status. Type: integer.
  • sha1: SHA1 hash of the submitted file. This field is not returned for submitted URLs. Type: string.
  • user: Identification number of the Spectra Analyze appliance user who submitted the file or URL. Type: integer.
  • created: Timestamp indicating the date and time when the processing task was created on the appliance. Type: string.
  • filename: Automatically assigned filename for the submitted file or URL. For file submissions, this can be overridden by explicitly specifying the filename parameter in the request. Type: string.
  • href: URL fragment that can be appended to the appliance hostname to access the sample analysis report directly on the appliance. This field is not returned for submitted URLs. Type: string.
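A short sketch of extracting the commonly used fields from a successful response. The function name is invented for illustration; the sample body is the file-submission example shown above:

```python
import json

def parse_submission_response(body):
    """Extract the fields most workflows need from a 201 response body.

    Returns (task_id, sha1, href); sha1 and href are None for URL
    submissions, which do not include those fields.
    """
    detail = json.loads(body)["detail"]
    return detail["id"], detail.get("sha1"), detail.get("href")

# Using the file-submission example response from above:
body = '''{"code": 201, "message": "Done.", "detail": {
  "id": 1234, "sha1": "988881adc9fc3655077dc2d4d757d480b5ea0e11",
  "user": 1, "created": "2015-06-18T12:01:15.123456Z",
  "filename": "test.txt",
  "href": "/?q=988881adc9fc3655077dc2d4d757d480b5ea0e11"}}'''
task_id, sha1, href = parse_submission_response(body)
```

Using `dict.get` for sha1 and href keeps the same parser working for both file and URL submissions.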

Response Status Codes

  • 201: Sample or URL successfully submitted for analysis.
  • 400: Bad Request. Validation error: parameters in the request are invalid or missing.
  • 403: Forbidden. Authentication credentials are invalid or missing.
  • 405: Not Allowed. It is not possible to upload samples while the appliance is in maintenance mode.
  • 413: Request Entity Too Large. The file exceeds the maximum configured file size.
  • 429: Too Many Requests. Returned when system resources are depleted: the system is using 90% or more of its available memory, or holds more than 50 items in the processing queue (both thresholds are configurable, either on the Administration > Configuration > Resource Usage Limits page or via Spectra Detect Manager). Alternatively, this response may indicate that the Spectra Intelligence quota has been exceeded.
  • 503: The appliance storage space is close to full. When less than 25% of disk space is free, it is not possible to upload new samples until disk space is freed up.
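Since 429 and 503 indicate transient appliance conditions (resource pressure, full processing queue, low disk space), clients may want to retry those submissions with backoff. A minimal sketch; the function name, attempt counts, and delays are illustrative assumptions, not documented API behavior:

```python
import time

# Status codes worth retrying: 429 (resources/quota) and 503 (low disk space)
RETRYABLE = {429, 503}

def submit_with_backoff(do_submit, max_attempts=5, base_delay=2.0):
    """Retry a submission callable with exponential backoff.

    do_submit is any zero-argument callable returning an object with a
    .status_code attribute (e.g. a requests.Response). Returns the last
    response, whether or not it eventually succeeded.
    """
    for attempt in range(max_attempts):
        response = do_submit()
        if response.status_code not in RETRYABLE:
            return response
        time.sleep(base_delay * (2 ** attempt))  # 2s, 4s, 8s, ...
    return response
```

Permanent errors such as 400, 403, 405, and 413 are returned immediately, since retrying them without changing the request cannot succeed.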

Submitting a file

Request Examples

cURL

# Basic file submission
# Add --insecure before the URL if you are using a self-signed SSL certificate
curl -X POST 'https://appliance.example.com/api/uploads/' \
--header 'Authorization: Token exampletoken' \
--form 'file=@test.exe'

# File submission with optional parameters
# Add --insecure before the URL if you are using a self-signed SSL certificate
curl -X POST 'https://appliance.example.com/api/uploads/' \
--header 'Authorization: Token exampletoken' \
--form 'file=@test.exe' \
--form 'filename=custom-filename.exe' \
--form 'analysis=cloud' \
--form 'tags=tag1,tag2,tag3' \
--form 'comment=Custom comment'

Python

# Basic file submission

import requests

# Change the token
token = "exampletoken"

# Change the hostname in the URL
url = "https://appliance.example.com/api/uploads/"

files = [
    ('file', ('test.exe', open('test.exe', 'rb'), 'application/octet-stream'))
]

headers = {
    "Authorization": f"Token {token}"
}

# Add verify=False in the request if you are using a self-signed SSL certificate
response = requests.post(url, headers=headers, files=files)

print(response.text)

# File submission with optional parameters

import requests

# Change the token
token = "exampletoken"

# Change the hostname in the URL
url = "https://appliance.example.com/api/uploads/"

payload = {
    "filename": "custom-filename.txt",
    "analysis": "cloud",
    "tags": "tag1,tag2,tag3",
    "comment": "This is a custom comment"
}
files = [
    ('file', ('test.exe', open('test.exe', 'rb'), 'application/octet-stream'))
]
headers = {"Authorization": f"Token {token}"}

# Add verify=False in the request if you are using a self-signed SSL certificate
response = requests.post(url, headers=headers, data=payload, files=files)

print(response.text)


Submitting a URL

Request Examples

cURL

# Add --insecure before the URL if you are using a self-signed SSL certificate
curl -X POST 'https://appliance.example.com/api/uploads/' \
--header 'Authorization: Token exampletoken' \
--form 'url=http://www.example.com'

Python

import requests

# Change the token
token = "exampletoken"

# Change the hostname in the URL
url = "https://appliance.example.com/api/uploads/"

payload = {'url': 'https://example.com'}

headers = {
"Authorization": f"Token {token}"
}

# Add verify=False in the request if you are using a self-signed SSL certificate
response = requests.post(url, headers=headers, data=payload)

print(response.text)